
Online and Stochastic Universal Gradient Methods for Minimizing Regularized Hölder Continuous Finite Sums


Abstract

Online and stochastic gradient methods have emerged as potent tools in large-scale optimization for both smooth and nonsmooth convex problems, drawn from the classes $C^{1,1}(\mathbb{R}^p)$ and $C^{1,0}(\mathbb{R}^p)$ respectively. To the best of our knowledge, however, few papers have used incremental gradient methods to optimize the intermediate classes of convex problems with Hölder continuous gradients, $C^{1,\nu}(\mathbb{R}^p)$. To bridge the gap between methods for smooth and nonsmooth problems, in this work we propose several online and stochastic universal gradient methods that do not need to know the actual degree of smoothness of the objective function in advance. We extend the scope of problems arising in machine learning to Hölder continuous functions and propose a general family of first-order methods. Regret and convergence analyses show that our methods enjoy strong theoretical guarantees. For the first time, we establish an algorithm that enjoys a linear convergence rate for convex functions with Hölder continuous gradients.
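The key idea behind universal gradient methods (in the sense of Nesterov) is a backtracking search on a local smoothness estimate, with a slack tied to the target accuracy, so the Hölder exponent never has to be supplied. Below is a minimal, illustrative Python sketch of that idea applied incrementally to a finite sum; the function names, the doubling/halving schedule for the estimate L, and the toy problem are assumptions for illustration only, not the authors' actual algorithms.

import numpy as np

def universal_gradient_step(f, grad_f, x, L, eps):
    """One backtracking step of a universal-gradient-style method.

    Doubles the local smoothness estimate L until an inexact quadratic
    upper bound (with slack eps/2) holds, so no Holder exponent is needed.
    """
    g = grad_f(x)
    fx = f(x)
    while True:
        x_new = x - g / L                                   # plain gradient step, step size 1/L
        d = x_new - x
        if f(x_new) <= fx + g @ d + 0.5 * L * (d @ d) + 0.5 * eps:
            return x_new, L
        L *= 2.0                                            # estimate was too optimistic; tighten it

def stochastic_universal_gradient(fs, grads, x0, eps=1e-3, n_epochs=50, L0=1.0, seed=0):
    """Incremental sketch: one component of the finite sum per step (hypothetical schedule)."""
    rng = np.random.default_rng(seed)
    x, L = x0.astype(float), L0
    n = len(fs)
    for _ in range(n_epochs):
        for i in rng.permutation(n):
            x, L = universal_gradient_step(fs[i], grads[i], x, L, eps)
            L = max(L / 2.0, 1e-8)                          # let the estimate shrink again
    return x

if __name__ == "__main__":
    # toy regularized least-squares finite sum: f_i(x) = 0.5*(a_i @ x - b_i)^2 + 0.5*(lam/n)*||x||^2
    rng = np.random.default_rng(1)
    A, b, lam, n = rng.standard_normal((20, 5)), rng.standard_normal(20), 0.1, 20
    fs = [lambda x, a=A[i], bi=b[i]: 0.5 * (a @ x - bi) ** 2 + 0.5 * lam / n * (x @ x) for i in range(n)]
    grads = [lambda x, a=A[i], bi=b[i]: (a @ x - bi) * a + lam / n * x for i in range(n)]
    x_star = stochastic_universal_gradient(fs, grads, np.zeros(5))
    print("objective:", sum(f(x_star) for f in fs))

Halving L after each accepted step lets the estimate track the local smoothness along the trajectory rather than locking onto a worst-case constant; this adaptivity is what makes the method "universal" across the classes $C^{1,\nu}(\mathbb{R}^p)$.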

Bibliographic details

  • Authors

    Shi, Ziqiang; Liu, Rujie;

  • Author affiliation
  • Year 2014
  • Total pages
  • Format PDF
  • Language English
  • CLC classification
